53 research outputs found

    Fault management and service provisioning process model of next generation access networks

    Get PDF
    Network operators are nowadays upgrading their access networks to cope with the increasing number of users and the increasing bandwidth required by services. However, cost is a crucial factor. To decide which next generation access network to implement, an accurate cost evaluation should be performed. This evaluation requires a total cost of ownership model that includes a detailed model of the most costly operational processes: fault management and customer provisioning. These models help identify the most costly sub-processes, where network providers should improve their cost efficiency. This paper presents detailed models for these operational processes and gives an approach to use them for estimating future operational costs.
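    To make the idea of a process-based operational cost model concrete, the following is a minimal sketch of how such an estimate might be assembled. All sub-process names, rates, and cost figures are invented for illustration; the paper's actual model and numbers are not reproduced here.

```python
# Hypothetical process-based OPEX estimate: each operational process is broken
# into sub-processes with an occurrence rate (events per subscriber per year)
# and a handling time. Every figure below is an illustrative assumption.

HOURLY_LABOUR_COST = 60.0  # assumed, in currency units per hour

# (sub-process, events per subscriber per year, hours per event)
fault_management = [
    ("fault detection and diagnosis", 0.30, 0.5),
    ("remote repair",                 0.20, 0.4),
    ("field repair with truck roll",  0.10, 2.5),
]
provisioning = [
    ("order handling",        1.0 / 7, 0.3),  # e.g. churn-driven re-orders
    ("in-house installation", 1.0 / 7, 2.0),
]

def annual_cost_per_subscriber(sub_processes):
    """Expected yearly cost per subscriber: sum of rate * time * labour cost."""
    return sum(rate * hours * HOURLY_LABOUR_COST
               for _, rate, hours in sub_processes)

for name, model in [("fault management", fault_management),
                    ("provisioning", provisioning)]:
    print(f"{name}: {annual_cost_per_subscriber(model):.2f} per subscriber per year")
```

    Summing per-sub-process contributions like this is what exposes the most costly sub-processes, which is the point the abstract makes about targeting cost-efficiency improvements.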

    Fault location algorithms for optical networks

    Get PDF
    Today, there is no doubt that optical networks are the solution to the explosion of Internet traffic that two decades ago we only dreamed about. They offer high capacity through the use of Wavelength Division Multiplexing (WDM) techniques, among others. However, this increase in available capacity can be undermined by the large amount of information that can be lost when a failure occurs, because not one but several channels will then be interrupted. Efficient fault detection and location mechanisms are therefore needed. Our challenge is to identify and locate failures (single or multiple) at the physical layer of an optical network in the presence of lost and/or false alarms. After briefly introducing optical networks and the multiplexing techniques that can be used, we study the most common components and their most usual failures. We propose a classification of all the network components based on their behaviour when failures occur. This classification gives an abstract model of the optical network, which is appropriate for developing algorithms to locate faulty elements. Two algorithms that solve the fault location problem are proposed. Both algorithms cope with the existence of false and missing alarms when locating single and multiple failures. The problem of locating multiple failures, even in the absence of false or missing alarms, has been shown to be NP-complete. The first algorithm, called the Alarm Filtering Algorithm (AFA), is based on the combination of two approaches: forward and backward. The forward approach returns, for each network element, its domain, which is the set of network elements that will send an alarm when the considered element fails. The backward approach returns the set of elements that are directly related to the received alarms; in this approach, alarms that are considered to provide redundant information are discarded. Combining the results of both approaches allows the location of multiple failures, given an allowed number of false and missing alarms. However, this algorithm does not minimize the complexity incurred when new alarms are received. Hence, a second algorithm, called the Fault Location Algorithm (FLA), is proposed. The FLA concentrates the complexity in a pre-computation phase, so that when new alarms are received, the result of the algorithm is rapidly available. The FLA is based on the construction of a binary tree that realizes a non-linear error-correcting code. The FLA has also been extended to locate soft failures in addition to hard failures: hard failures are unexpected failures, whereas soft failures are progressive failures due to equipment aging, misalignment, or external factors such as temperature or pressure. Both algorithms are compared on simulated networks using different network topologies and failure cases; the comparison has also been done on the basis of their worst-case complexity. Conclusions indicating the settings under which each algorithm performs best were obtained.
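    The forward/backward idea can be illustrated with a toy sketch. The network, alarm model, and tolerance handling below are simplified assumptions invented here; this is not the thesis's AFA, only the underlying domain-matching intuition, with smaller fault sets preferred.

```python
# Toy sketch of alarm-based fault location via element "domains": the domain
# of an element is the set of elements that raise an alarm when it fails.
from itertools import combinations

# Forward step (precomputed): domain[x] = elements alarming when x fails.
# This 4-element chain (transmitter -> amplifier -> two receivers) is made up.
domain = {
    "tx":  {"tx", "amp", "rx1", "rx2"},
    "amp": {"amp", "rx1", "rx2"},
    "rx1": {"rx1"},
    "rx2": {"rx2"},
}

def locate(alarms, max_false=0, max_missing=0, max_faults=2):
    """Backward step: candidate fault sets whose combined domain matches the
    received alarms, tolerating a bounded number of false/missing alarms."""
    elements = list(domain)
    for k in range(1, max_faults + 1):  # prefer parsimonious explanations
        candidates = []
        for fault_set in combinations(elements, k):
            expected = set().union(*(domain[e] for e in fault_set))
            false_alarms = len(alarms - expected)    # received, not explained
            missing_alarms = len(expected - alarms)  # explained, not received
            if false_alarms <= max_false and missing_alarms <= max_missing:
                candidates.append(fault_set)
        if candidates:
            return candidates
    return []

print(locate({"amp", "rx1", "rx2"}))          # [('amp',)]
print(locate({"rx1", "rx2"}, max_missing=1))  # [('amp',)], one alarm lost
```

    Note how the brute-force search over fault sets grows combinatorially, which is consistent with the abstract's remark that multiple-failure location is NP-complete even without false or missing alarms.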

    HeCSON: Heuristic for Configuration Selection in Optical Network Planning

    Full text link
    We present a transceiver configuration selection heuristic combining Enhanced Gaussian Noise (EGN) models, which shows a 40% increase in throughput compared to using only the approximate EGN, and an 87% decrease in execution time compared to the full-form EGN.
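    The abstract does not spell out the selection mechanism, so the following is only a hedged sketch of one plausible way to combine a fast approximate model with a slow, accurate one: rank configurations with the cheap estimate and re-evaluate with the expensive model only near the decision margin. All functions, fields, and thresholds here are hypothetical stand-ins, not HeCSON itself.

```python
def approx_snr(config):   # stand-in for a fast approximate EGN evaluation
    return config["snr_estimate"]

def full_snr(config):     # stand-in for a slow full-form EGN evaluation
    return config["snr_estimate"] - config.get("correction", 0.0)

def select_config(configs, margin_db=0.5):
    """Pick the highest-throughput configuration whose SNR clears its
    requirement; borderline cases are re-checked with the full model."""
    for c in sorted(configs, key=lambda c: c["throughput"], reverse=True):
        gap = approx_snr(c) - c["required_snr"]
        if gap < 0:
            continue                       # clearly infeasible: skip cheaply
        if gap < margin_db and full_snr(c) < c["required_snr"]:
            continue                       # borderline and full model says no
        return c                           # first feasible = best throughput
    return None

configs = [
    {"name": "64QAM", "throughput": 600, "snr_estimate": 17.2,
     "required_snr": 17.0, "correction": 0.4},
    {"name": "16QAM", "throughput": 400, "snr_estimate": 15.0,
     "required_snr": 12.5},
]
print(select_config(configs)["name"])  # 64QAM fails the full check -> 16QAM
```

    The design intuition is that the expensive model is only paid for where the cheap one cannot be trusted, which is how a heuristic can cut execution time without sacrificing much accuracy.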

    Optical Networks and Interconnects

    Full text link
    The rapid evolution of communication technologies, such as 5G and beyond, relies on optical networks to support challenging and ambitious requirements that include both capacity and reliability. This chapter begins by giving an overview of the evolution of optical access networks, focusing on Passive Optical Networks (PONs). The development of the different PON standards and requirements aiming at longer reach, higher client count, and higher delivered bandwidth is presented. PON virtualization is also introduced as the flexibility enabler. Triggered by the increase of bandwidth supported by access and aggregation network segments, core networks have also evolved, as presented in the second part of the chapter. Scaling the physical infrastructure requires high investment; hence, operators are considering alternatives to optimize the use of the existing capacity. This chapter introduces different planning problems, such as Routing and Spectrum Assignment problems and placement problems for regenerators and wavelength converters, and how to offer resilience to different failures. An overview of control and management is also provided. Moreover, motivated by the increasing importance of data storage and data processing, this chapter also addresses different aspects of optical data center interconnects. Data centers have become critical infrastructure to operate any service. They are also forced to take advantage of optical technology in order to keep up with the growing capacity demand and power consumption. This chapter gives an overview of different optical data center network architectures as well as some expected directions to improve the resource utilization and increase the network capacity.
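    Of the planning problems mentioned, Routing and Spectrum Assignment (RSA) is easy to make concrete. Below is a minimal sketch of the spectrum-assignment half under common simplifying assumptions (a first-fit policy, a fixed precomputed route, spectrum continuity along the route, contiguous slots); the topology and demand are invented for illustration.

```python
# First-fit spectrum assignment on a fixed route: find the lowest starting
# slot index whose contiguous block is free on every link of the route.
NUM_SLOTS = 8
# occupied[link] = set of busy frequency-slot indices on that link (made up)
occupied = {
    ("a", "b"): {0, 1},
    ("b", "c"): {1, 2, 3},
}

def first_fit(route_links, demand_slots):
    """Lowest start index such that demand_slots contiguous slots are free on
    every link of the route (continuity + contiguity constraints)."""
    for start in range(NUM_SLOTS - demand_slots + 1):
        block = set(range(start, start + demand_slots))
        if all(block.isdisjoint(occupied[link]) for link in route_links):
            return start
    return None  # blocked: no common contiguous block on this route

route = [("a", "b"), ("b", "c")]
start = first_fit(route, demand_slots=2)
if start is not None:
    for link in route:  # commit the assignment on every link of the route
        occupied[link] |= set(range(start, start + 2))
print(start)  # -> 4: slots {4, 5} are free on both links
```

    Real RSA formulations jointly choose the route and the slots, which is what makes the problem hard; this sketch only shows the constraints that any solution must satisfy.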

    Assessment of connectivity-based resilience to attacks against multiple nodes in SDNs

    Get PDF
    In Software Defined Networks (SDNs), the control plane of a network is decoupled from its data plane. For scalability and robustness, the logically centralized control plane is implemented by physically placing different controllers throughout the network. The determination of the number and placement of controllers is known as the Controller Placement Problem (CPP). In the regular (i.e., failure-free) state, the control plane must guarantee a given maximum delay between every switch and its primary controller and a given maximum delay between every pair of controllers. In general, these delay bounds allow multiple solutions and, so, other goals can be used to determine the best CPP solution. In this paper, we assess the connectivity-based resilience to malicious attacks against multiple network nodes of the CPP solutions obtained with three different aims: the regular state delay optimization without any concern about attacks, the regular state delay optimization taking into consideration the worst-case attacks, and the resilience optimization to attacks against multiple nodes. We assess the CPP solutions considering attacks of targeted nature (when the attacker has complete knowledge of the data plane) and attacks of non-targeted nature (i.e., random and epidemic attacks). We present computational results providing an analysis of the CPP solutions to the different types of attacks. The main conclusion is that the connectivity-based resilience between the different CPP solutions strongly depends on the network topology, the regular state delay bounds, and the type of attacks. Finally, we provide insights on how SDN operators can consider the conducted assessment when deciding the controller placements in their networks.
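    The regular-state feasibility test described above can be sketched directly: every switch must reach its primary (nearest) controller within one bound, and every controller pair must be within another. The 4-node delay matrix and bounds below are invented for illustration, and the attack-resilience assessment itself is not modelled here.

```python
# Hedged sketch of the CPP regular-state delay check for candidate placements.
from itertools import combinations

# delay[i][j]: delay between nodes i and j (symmetric, ms; toy numbers)
delay = [
    [0, 2, 5, 7],
    [2, 0, 3, 6],
    [5, 3, 0, 2],
    [7, 6, 2, 0],
]

def feasible(placement, d_sc, d_cc):
    """True if the controller placement meets both regular-state bounds."""
    switches = range(len(delay))
    # each switch within d_sc of its primary (nearest) controller
    if any(min(delay[s][c] for c in placement) > d_sc for s in switches):
        return False
    # each controller pair within d_cc of each other
    return all(delay[a][b] <= d_cc for a, b in combinations(placement, 2))

# enumerate all 2-controller placements that satisfy the bounds
ok = [p for p in combinations(range(len(delay)), 2) if feasible(p, 3, 6)]
print(ok)  # -> [(0, 2), (1, 2), (1, 3)] with these toy numbers
```

    That several placements pass illustrates the paper's point that the delay bounds generally allow multiple solutions, leaving room for a secondary objective such as resilience to node attacks.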